Use of fast realistic simulations on GPU to extract CAD models from microtomographic data in the presence of strong CT artefacts:

Registration of Tungsten Fibres on XCT Images

by Franck P. Vidal, Iwan T. Mitchell and Jean-Michel Létang

This demo shows how to register polygon meshes onto X-ray microtomography (micro-CT) scans of tungsten fibres. The code relies on two building blocks:

  1. A global optimisation algorithm. We use the CMA-ES (Covariance Matrix Adaptation Evolution Strategy). It is an evolutionary algorithm for difficult non-linear non-convex optimisation problems.
  2. A fast X-ray simulation toolkit. We use gVirtualXRay. It is a framework supporting many modern programming languages to generate realistic X-ray images from polygon meshes (triangles or tetrahedrons) on the graphics processing unit (GPU).

Below is an example of a CT slice from an experiment we carried out at the European Synchrotron Radiation Facility (ESRF).

In a previous article, Investigation of artefact sources in synchrotron microtomography via virtual X-ray imaging, published in Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, we demonstrated that the image above was corrupted by:

  1. beam hardening, despite the use of a monochromator,
  2. the response of the camera, despite the point spread function (PSF) being almost a Dirac, and
  3. phase contrast.

That study was published in 2005, when computers were still relatively slow. Since then, massively parallel processors such as graphics processing units (GPUs) have emerged. Using today's hardware, we will demonstrate that we can now finely tune the virtual experiments by mathematical optimisation to register polygon meshes on XCT data. Our simulations will include beam hardening due to polychromatism, take into account the response of the detector, and include phase contrast.

Registration steps

  1. Initialisation
  2. Simulate the CT acquisition
  3. Registration of the Ti90Al6V4 matrix
  4. Optimisation of the cores and fibres radii
  5. Recentre each core/fibre
  6. Optimisation of the radii after recentring
  7. Optimisation of the beam spectrum
  8. Optimisation of the phase contrast and the radii
  9. Optimisation of the phase contrast and the LSF
  10. Optimisation of the Poisson noise
  11. Results in terms of linear attenuation coefficients

In image registration, a moving object is geometrically deformed so that its image matches a target image. The parameters of the deformation are controlled and iteratively tuned by an optimisation algorithm.

In our context, the target is the sinogram provided by the experiment at ESRF. The moving image is created by simulation using the CAD models and gVirtualXRay. The simulation parameters controlling the CAD models are iteratively tuned by a global optimisation algorithm until a stopping criterion is met. The optimisation algorithm minimises (or maximises) a numerical value, the objective function. The comparison between the target and moving images, performed within the objective function, measures how different (or similar) the two images are.

Import packages

We need to import a few libraries (called packages in Python). We use:

Global variables

We need some global variables:

Load the image data

Load and display the reference projections from a raw binary file, i.e. the target of the registration.

We define a function to save raw images in the MHA format:
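The notebook presumably relies on an imaging library such as SimpleITK for this. As a dependency-free illustration of what the function has to produce, here is a sketch that writes a 2D float32 image as an uncompressed MetaImage (.mha) file; the `save_mha` name and its signature are ours:

```python
import numpy as np

def save_mha(filename, image, spacing=(1.0, 1.0)):
    """Write a 2D float32 image as an uncompressed MetaImage (.mha) file."""
    image = np.asarray(image, dtype=np.float32)
    header = (
        "ObjectType = Image\n"
        "NDims = 2\n"
        "BinaryData = True\n"
        "BinaryDataByteOrderMSB = False\n"
        f"DimSize = {image.shape[1]} {image.shape[0]}\n"
        f"ElementSpacing = {spacing[0]} {spacing[1]}\n"
        "ElementType = MET_FLOAT\n"
        "ElementDataFile = LOCAL\n"  # pixel data follows the header
    )
    with open(filename, "wb") as f:
        f.write(header.encode("ascii"))
        f.write(image.tobytes())
```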

The reference projections in a MHA file

Display the reference projections using Matplotlib

In the literature, a projection is often modelled using the polychromatic version of the Beer-Lambert law: $$\mathbf{I}(x,y) = \sum_i \mathbf{R}_i \, \mathbf{N}_i \; \exp\left({-\sum_j \mu_j(E_i) \; \mathbf{d}_j(x,y)}\right)$$ where $\mathbf{N}_i$ is the number of incident photons at energy $E_i$, $\mathbf{R}_i$ the response of the detector at $E_i$, $\mu_j(E_i)$ the linear attenuation coefficient of the $j$-th material at that energy, and $\mathbf{d}_j(x,y)$ the path length of the corresponding ray through that material.

Projections are then corrected to account for variations in beam homogeneity and in the pixel-to-pixel sensitivity of the detector. This is the projection with flat-field correction ($\mathbf{Proj}$): $$\mathbf{Proj} = \frac{\mathbf{I} - \mathbf{D}}{\mathbf{F} - \mathbf{D}}$$ $\mathbf{F}$ (full fields) and $\mathbf{D}$ (dark fields) are projection images without sample and acquired with and without the X-ray beam turned on respectively. Note that with an ideal detector ($\mathbf{R}_i=E_i$), pixels of $\mathbf{D}$ are null, and pixels of $\mathbf{F}$ are equal to $\sum_i E_i \; \mathbf{N}_i$.

reference_normalised_projections (the figure above) is the data loaded from the binary file. It corresponds to $\mathbf{Proj}$, i.e. the flat-field correction has already been performed.

We can see that when the primary spectrum is not monochromatic the measurement is the sum of several attenuation laws. We could however compute the effective monochromatic attenuation that would give the same measurement: $$ \mathbf{I}(x,y) = \mathbf{I}_0(x,y) \; \exp\left({-\sum_j \mu_j(E_{\mathrm{eff}}) \; \mathbf{d}_j(x,y)}\right)$$

with $\mathbf{I}_0(x,y) = \sum_i \mathbf{R}_i \, \mathbf{N}_i$, and where $E_{\mathrm{eff}}$ corresponds to the monochromatic energy that would give the same attenuation as the one measured. We are now able to linearise the transmission tomography data, namely $\mathbf{Proj}$, and we get the sinogram: $$\textbf{Sino}=-\ln\left(\textbf{Proj}\right)$$

We define a new function to compute the sinogram from the flat-field corrected projections and call it straight away.

Compute the sinogram from the flat-field data
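The minus-log transform itself is a one-liner. A sketch of such a function (the `flat_field_to_sinogram` name is ours), with a clamp to avoid taking the logarithm of zero on fully attenuated pixels:

```python
import numpy as np

def flat_field_to_sinogram(normalised_projections):
    """Apply the minus-log transform to flat-field corrected projections."""
    # Clamp to a small positive value to avoid log(0)
    return -np.log(np.clip(normalised_projections, 1e-9, None))
```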

Save the corresponding image

Display the sinogram using Matplotlib

CT reconstruction

Now that we have a sinogram, we can reconstruct the CT slice. As we used a synchrotron, we can assume a parallel beam. It means we can use filtered back projection (FBP) rather than the FDK algorithm. In fact we use the gridrec algorithm, which is much faster:

Dowd BA, Campbell GH, Marr RB, Nagarkar VV, Tipnis SV, Axe L, and Siddons DP. Developments in synchrotron x-ray computed microtomography at the national synchrotron light source. In Proc. SPIE, volume 3772, 224–236. 1999.
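The notebook relies on gridrec for speed; purely to illustrate the underlying filtered back projection principle, here is a minimal numpy sketch (the `fbp` name, ramp filter and nearest-neighbour backprojection are our simplifications, not the gridrec algorithm):

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Minimal parallel-beam filtered back projection with a ramp filter."""
    n_angles, n_det = sinogram.shape
    # Ramp filter applied in the Fourier domain, row by row
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    mid = n_det // 2
    x = np.arange(n_det) - mid
    X, Y = np.meshgrid(x, x)
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate of each image pixel for this projection angle
        s = X * np.cos(theta) + Y * np.sin(theta)
        idx = np.clip(np.round(s).astype(int) + mid, 0, n_det - 1)
        recon += proj[idx]  # nearest-neighbour backprojection
    return recon * np.pi / n_angles
```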

Save the reconstruction in a MHA file

Plot the CT slice using Matplotlib

Normalise the image data

Zero-mean, unit-variance normalisation is applied to use the reference images in objective functions and perform the registration. Note that it is called standardisation (or Z-score Normalisation) in machine learning. It is computed as follows:

$$\mathbf{m}_o=\frac{\mathbf{m}-\bar{m}}{\sigma_m}$$

where $\mathbf{m}_o$ is the image after normalisation of Image $\mathbf{m}$, $\bar{m}$ is the average pixel value of Image $\mathbf{m}$, and $\sigma_m$ its standard deviation. After normalisation, the average pixel value is null and the standard deviation of pixel values is equal to one.

We define a function to apply this:
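A minimal sketch of the zero-mean, unit-variance normalisation (the `standardise` name is ours):

```python
import numpy as np

def standardise(image):
    """Zero-mean, unit-variance normalisation (Z-score normalisation)."""
    return (image - image.mean()) / image.std()
```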

Normalise the reference sinogram and CT slice

Set the X-ray simulation environment

First we create an OpenGL context, here using EGL, i.e. no window.

We set the parameters of the X-ray detector (flat panel), e.g. number of pixels, pixel spacing, position and orientation:

And the source parameters (beam shape, source position)

The beam spectrum. Here we have a polychromatic beam, with 97% of the photons at 33 keV, 2% at 66 keV and 1% at 99 keV.
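In gVirtualXRay the spectrum is defined bin by bin. As a quick sanity check of the numbers above (plain numpy, not the notebook's code), the photon-weighted mean energy of this spectrum is:

```python
import numpy as np

# Polychromatic beam: 97% of photons at 33 keV, 2% at 66 keV, 1% at 99 keV
energies = np.array([33.0, 66.0, 99.0])  # keV
weights = np.array([0.97, 0.02, 0.01])   # photon fractions, summing to 1

mean_energy = float(np.sum(energies * weights))  # photon-weighted mean energy in keV
```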

Plot the beam spectrum using Matplotlib

The material properties (chemical composition and density)

The LSF

In a previous study, we experimentally measured the impulse response of the detector as the line spread function (LSF):

F.P. Vidal, J.M. Létang, G. Peix, P. Cloetens, Investigation of artefact sources in synchrotron microtomography via virtual X-ray imaging, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, Volume 234, Issue 3, 2005, Pages 333-348, ISSN 0168-583X, DOI 10.1016/j.nimb.2005.02.003.

We use this model during the initial steps of the registration. The LSF model will be tuned in one of the final steps of the registration.

Plot the LSF using Matplotlib

Find circles to identify the centre of fibres

We can use the Hough transform to detect where circles are in the image. However, the input image of OpenCV's function must be an 8-bit unsigned integer image (UINT8). We also blur it using a bilateral filter (an edge-preserving smoothing filter).

Convert the image to UINT8

We first create a function to convert floating-point images into UINT8.
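A sketch of such a conversion, rescaling the dynamic range of the image to [0, 255] (the `float_to_uint8` name is ours):

```python
import numpy as np

def float_to_uint8(image):
    """Rescale a floating-point image to [0, 255] and cast it to UINT8."""
    image = np.array(image, dtype=np.float64)  # work on a copy
    image -= image.min()
    if image.max() > 0:
        image /= image.max()
    return (image * 255).astype(np.uint8)
```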

We blur the CT scan using a bilateral filter. It preserves edges.

Apply the Hough transform

As the fibres and the cores correspond to circles in the CT images, the obvious technique to try is the Hough Circle Transform (HCT). It is a feature extraction technique used in image analysis that can output a list of circles (centres and radii).

Overlay the detected circles on the top of the image

13 fibres were missed and many centres were misplaced. The meta-parameters of the algorithm are difficult to control, which makes it hard to employ in a fully-automatic registration framework. We will use another technique to detect the fibres: the popular Otsu's method. It computes the histogram of the image and uses a heuristic to determine a threshold value that separates foreground from background.
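For reference, Otsu's heuristic maximises the between-class variance over all candidate thresholds. A self-contained numpy sketch (not the notebook's code, which presumably calls a library implementation):

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Return the threshold maximising the between-class variance (Otsu)."""
    hist, bin_edges = np.histogram(image, bins=n_bins)
    centres = (bin_edges[:-1] + bin_edges[1:]) / 2
    hist = hist.astype(float) / hist.sum()
    w0 = np.cumsum(hist)            # background class probability
    w1 = 1.0 - w0                   # foreground class probability
    mu = np.cumsum(hist * centres)  # cumulative mean
    mu_t = mu[-1]                   # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    return centres[np.nanargmax(between)]
```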

Clean up

Mark each potential tungsten core with a unique label

Each distinct tungsten core is assigned a unique label, i.e. a unique pixel intensity.

Object Analysis

Once we have the segmented objects we look at their shapes and the intensity distributions inside the objects. For each labelled tungsten core, we extract the centroid. Note that sizes and positions are given in millimetres.
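Such an analysis can be sketched with scipy.ndimage: label the connected components of the segmented image, then compute the centroid of each label (the `find_centroids` helper and its millimetre conversion are our illustration, not the notebook's code):

```python
import numpy as np
from scipy import ndimage

def find_centroids(binary_image, pixel_spacing_mm):
    """Label connected components and return their centroids in millimetres."""
    labels, n = ndimage.label(binary_image)
    centroids_px = ndimage.center_of_mass(binary_image, labels, range(1, n + 1))
    # Convert (row, column) pixel coordinates to millimetres
    return [(y * pixel_spacing_mm, x * pixel_spacing_mm) for y, x in centroids_px]
```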

We now have a list of the centres of all the fibres, which can be used as a parameter of the function below to create the cylinders corresponding to the cores and the fibres. For each core, a cylinder is created and translated:

gvxr.emptyMesh("core_" + str(i));
gvxr.makeCylinder("core_" + str(i), number_of_sectors, 815.0, core_radius, "micrometer");
gvxr.translateNode("core_" + str(i), y, 0.0, x, "micrometer");

For each fibre, another cylinder is created and translated:

gvxr.emptyMesh("fibre_" + str(i));
gvxr.makeCylinder("fibre_" + str(i), number_of_sectors, 815.0, fibre_radius, "micrometer");
gvxr.translateNode("fibre_" + str(i), y, 0.0, x, "micrometer");

The fibre's cylinder is hollowed to make space for its core:

gvxr.subtractMesh("fibre_" + str(i), "core_" + str(i));

Registration of a cube

We define a function to create the polygon mesh of the Ti90Al6V4 matrix.

Simulate the CT acquisition

There are 7 successive steps to simulate the XCT data acquisition:

  1. Set the fibre and cores geometries and material properties (Step 45)
  2. Set the matrix geometry and material properties (Step 46)
  3. Simulate the raw projections for each angle:
    • Without phase contrast (Line 9 of Step 49), or
    • With phase contrast (Lines 12-55 of Step 49)
  4. Apply the LSF (Lines 57-60 of Step 49)
  5. Apply the flat-field correction (Line 63 of Step 49)
  6. Add Poisson noise (Lines 67-76 of Step 49)
  7. Apply the minus log normalisation to compute the sinogram (Line 79 of Step 49)

Compute the raw projections and save the data. For this purpose, we define a new function.

Flat-field correction

Because X-ray imaging data acquired in actual experiments suffers from fixed-pattern noise, it is necessary to perform the flat-field correction of the raw projections using:

$$\mathbf{Proj} = \frac{\mathbf{I} - \mathbf{D}}{\mathbf{F} - \mathbf{D}}$$

$\mathbf{F}$ (full fields) and $\mathbf{D}$ (dark fields) are projection images without sample and acquired with and without the X-ray beam turned on respectively.

Note that in our example, raw_projections_in_keV ($\mathbf{I}$), flat_field_image ($\mathbf{F}$) and dark_field_image ($\mathbf{D}$) are in keV whereas corrected_projections ($\mathbf{Proj}$) does not have any unit:

$$0 \leq \mathbf{I} \leq \sum_E N_0(E) \times E$$

$$0 \leq \mathbf{Proj} \leq 1$$

We define a new function to compute the flat-field correction.
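A sketch of such a function, guarding against division by zero in the denominator (the `flat_field_correction` name is ours):

```python
import numpy as np

def flat_field_correction(raw, flat, dark):
    """Compute (I - D) / (F - D), avoiding division by zero."""
    denominator = np.where(flat - dark == 0, 1e-9, flat - dark)
    return (raw - dark) / denominator
```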

The function below is used to simulate a sinogram acquisition. Phase contrast in the projections can be taken into account or not. Also, Poisson noise can be added.

The function below is used to quantify the differences between two images. It is used in the objective function.
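A natural metric here is the zero-mean normalised cross-correlation (ZNCC), which the recentring step later in this demo also relies on. A minimal sketch (the `zero_mean_ncc` name is ours):

```python
import numpy as np

def zero_mean_ncc(reference, test_image):
    """Zero-mean normalised cross-correlation, in [-1, 1]."""
    a = (reference - reference.mean()) / reference.std()
    b = (test_image - test_image.mean()) / test_image.std()
    return float(np.mean(a * b))
```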

The function below is the objective function used to register the matrix.

Apply the result of the registration

Simulate the corresponding CT acquisition

Display the result of the registration as an animation

View the GIF animation (no preview on GitHub .ipynb files)

View the last frame

Adding the fibres

The radius of a tungsten core is $30/2 = 15$ µm. The pixel spacing is 1.9 µm. The radius in number of pixels is $15/1.9 \approx 7.89$. The area of a core is about $(15/1.9)^2 \pi \approx 196$ pixels.
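The arithmetic above can be checked directly:

```python
import math

core_radius_um = 30.0 / 2        # 15 µm
pixel_spacing_um = 1.9

radius_px = core_radius_um / pixel_spacing_um  # radius in pixels, ~7.89
area_px = math.pi * radius_px ** 2             # core area in pixels, ~196
```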

Optimisation of the cores and fibres radii

The function below is the objective function used to optimise the radii of the cores and fibres.

View the GIF animation (no preview on GitHub .ipynb files)

View the last frame

Apply the result of the registration

The 3D view of the registration looks like:

Recentre each core/fibre

Each fibre is extracted from both the reference CT slice and simulated CT slice. The displacement between the corresponding fibres is computed to maximise the ZNCC between the two. The centre of the fibre is then adjusted accordingly.
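A brute-force version of this displacement search can be sketched as follows: try every integer shift in a small window and keep the one maximising the correlation. The `find_shift` helper, the window size and the whole-pixel rolling are our simplifications; the notebook may proceed differently:

```python
import numpy as np

def find_shift(reference, moving, max_shift=5):
    """Return the integer (dy, dx) shift of `moving` maximising the ZNCC."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            rolled = np.roll(moving, (dy, dx), axis=(0, 1))
            # Pearson correlation of the flattened images equals the ZNCC
            score = np.corrcoef(reference.ravel(), rolled.ravel())[0, 1]
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift
```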

Applying the result of recentring

Optimisation of the radii after recentring

After recentring the centres, another run of optimisation is executed to refine the radii of the fibres and cores.

View the GIF animation (no preview on GitHub .ipynb files)

View the last frame

Apply the result of the registration

Optimisation of the beam spectrum

View the GIF animation (no preview on GitHub .ipynb files)

View the last frame

Optimisation of the phase contrast and the radii

View the GIF animation (no preview on GitHub .ipynb files)

View the last frame

Apply the result of the registration

Optimisation of the phase contrast and the LSF

View the GIF animation (no preview on GitHub .ipynb files)

View the last frame

Apply the result of the registration

Extract the fibre in the centre of the CT slices

Optimisation of the Poisson noise

Apply the result of the optimisation

Results in terms of linear attenuation coefficients

Reduce the ROI size to focus on a single fibre and its surrounding matrix

Extract the ROIs

Save the ROIs

A function to create a circular binary mask
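Such a mask can be built with numpy's open grids (the `circular_mask` name is ours):

```python
import numpy as np

def circular_mask(shape, centre, radius):
    """Boolean mask of the given shape, True inside the circle."""
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    return (rows - centre[0]) ** 2 + (cols - centre[1]) ** 2 <= radius ** 2
```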

A function to create the binary masks for the core, fibre and matrix

Create binary masks for the core, fibre and matrix

Save the binary masks

Display the masks

A function to collect all the $\mu$ statistics from the masks.

Get the dataframe with all the values and display it as a table

Save the CAD models and plot them in 3D

Load the STL files

Create the k3d geometries

Create a new plot

Get a screenshot

Save the figure